This supplementary information presents:

  • first, the code to generate the figures from the paper,
  • second, some control experiments that were mentioned in the paper,
  • finally, some perspectives for future work inspired by the algorithms presented in the paper.

Figures for "An adaptive algorithm for unsupervised learning"

In [1]:
%load_ext autoreload
%autoreload 2

A convenience script, model.py, runs and caches most of the learning items in this notebook:
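The cached items end up as pickled files (the `*_dico.pkl` and `*_lock` files listed further down). As a rough sketch of the compute-or-load pattern such a script may implement — all names below are illustrative, not the actual model.py API:

```python
import os
import pickle
import tempfile

def cached(matname, compute, cache_dir=None):
    """Return compute(), loading it from cache_dir/<matname>.pkl when available."""
    cache_dir = cache_dir or tempfile.gettempdir()
    fname = os.path.join(cache_dir, matname + '.pkl')
    if os.path.exists(fname):
        with open(fname, 'rb') as f:
            return pickle.load(f)          # cache hit: skip the computation
    result = compute()
    with open(fname, 'wb') as f:
        pickle.dump(result, f)             # cache miss: store for next time
    return result

calls = []
def expensive():
    # stand-in for a long dictionary learning run
    calls.append(1)
    return 42

with tempfile.TemporaryDirectory() as d:
    a = cached('demo_dico', expensive, cache_dir=d)
    b = cached('demo_dico', expensive, cache_dir=d)  # served from cache
```

The real script additionally uses lock files (the `*_lock_pid-*` entries below) so that several concurrent jobs do not recompute the same item.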

In [2]:
%run model.py
tag = HULK
n_jobs = 0
In [3]:
from shl_scripts.shl_experiments import SHL
shl = SHL(**opts)
data = shl.get_data(matname=tag)
In [4]:
shl?
Type:        SHL
String form: <shl_scripts.shl_experiments.SHL object at 0x10ff1f748>
File:        ~/science/HULK/SparseHebbianLearning/shl_scripts/shl_experiments.py
Docstring:  
Base class to define SHL experiments:
    - initialization
    - coding and learning
    - visualization
    - quantitative analysis
In [5]:
print('# of pixels per patch =', shl.patch_width**2)
# of pixels per patch = 441
In [6]:
print('number of patches, size of patches = ', data.shape)
print('average of patches = ', data.mean(), ' +/- ', data.mean(axis=1).std())
SE = np.sqrt(np.mean(data**2, axis=1))
print('average energy of data = ', SE.mean(), '+/-', SE.std())
number of patches, size of patches =  (65520, 441)
average of patches =  -4.1888600727021664e-05  +/-  0.006270387629074682
average energy of data =  0.26082782604823146 +/- 0.07415089441760706
In [7]:
#!ls -l {shl.cache_dir}/
#!ls -l {shl.cache_dir}/{tag}*
!ls {shl.cache_dir}/{tag}*lock*
#!rm {shl.cache_dir}/{tag}*lock*
#!rm {shl.cache_dir}/{tag}*
#!ls -l {shl.cache_dir}/{tag}*
cache_dir/HULK_None_alpha_homeo=0.62500_dico.pkl_lock
cache_dir/HULK_None_alpha_homeo=0.62500_dico.pkl_lock_pid-39738_host-fortytwo
cache_dir/HULK_None_alpha_homeo=0.88388_dico.pkl_lock
cache_dir/HULK_None_alpha_homeo=0.88388_dico.pkl_lock_pid-39753_host-fortytwo
cache_dir/HULK_None_alpha_homeo=1.25000_dico.pkl_lock
cache_dir/HULK_None_alpha_homeo=1.25000_dico.pkl_lock_pid-39743_host-fortytwo
cache_dir/HULK_None_alpha_homeo=1.76777_dico.pkl_lock
cache_dir/HULK_None_alpha_homeo=1.76777_dico.pkl_lock_pid-39751_host-fortytwo
cache_dir/HULK_None_alpha_homeo=10.00000_dico.pkl_lock
cache_dir/HULK_None_alpha_homeo=10.00000_dico.pkl_lock_pid-42012_host-fortytwo
cache_dir/HULK_None_alpha_homeo=2.50000_dico.pkl_lock
cache_dir/HULK_None_alpha_homeo=2.50000_dico.pkl_lock_pid-39752_host-fortytwo
cache_dir/HULK_None_alpha_homeo=3.53553_dico.pkl_lock
cache_dir/HULK_None_alpha_homeo=3.53553_dico.pkl_lock_pid-38355_host-fortytwo
cache_dir/HULK_None_alpha_homeo=5.00000_dico.pkl_lock
cache_dir/HULK_None_alpha_homeo=5.00000_dico.pkl_lock_pid-42019_host-fortytwo
cache_dir/HULK_None_alpha_homeo=7.07107_dico.pkl_lock
cache_dir/HULK_None_alpha_homeo=7.07107_dico.pkl_lock_pid-42017_host-fortytwo
cache_dir/HULK_None_l0_sparseness=10_dico.pkl_lock
cache_dir/HULK_None_l0_sparseness=10_dico.pkl_lock_pid-42003_host-fortytwo
cache_dir/HULK_None_l0_sparseness=116_dico.pkl_lock
cache_dir/HULK_None_l0_sparseness=116_dico.pkl_lock_pid-42018_host-fortytwo
cache_dir/HULK_None_l0_sparseness=14_dico.pkl_lock
cache_dir/HULK_None_l0_sparseness=14_dico.pkl_lock_pid-42008_host-fortytwo
cache_dir/HULK_None_l0_sparseness=20_dico.pkl_lock
cache_dir/HULK_None_l0_sparseness=20_dico.pkl_lock_pid-42016_host-fortytwo
cache_dir/HULK_None_l0_sparseness=29_dico.pkl_lock
cache_dir/HULK_None_l0_sparseness=29_dico.pkl_lock_pid-42007_host-fortytwo
cache_dir/HULK_None_l0_sparseness=41_dico.pkl_lock
cache_dir/HULK_None_l0_sparseness=41_dico.pkl_lock_pid-42004_host-fortytwo
cache_dir/HULK_None_l0_sparseness=58_dico.pkl_lock
cache_dir/HULK_None_l0_sparseness=58_dico.pkl_lock_pid-42015_host-fortytwo
cache_dir/HULK_None_l0_sparseness=7_dico.pkl_lock
cache_dir/HULK_None_l0_sparseness=7_dico.pkl_lock_pid-42011_host-fortytwo
cache_dir/HULK_None_l0_sparseness=82_dico.pkl_lock
cache_dir/HULK_None_l0_sparseness=82_dico.pkl_lock_pid-42013_host-fortytwo
cache_dir/HULK_None_n_dictionary=1156_dico.pkl_lock
cache_dir/HULK_None_n_dictionary=1156_dico.pkl_lock_pid-42014_host-fortytwo
cache_dir/HULK_None_n_dictionary=1634_dico.pkl_lock
cache_dir/HULK_None_n_dictionary=1634_dico.pkl_lock_pid-42020_host-fortytwo
cache_dir/HULK_None_n_dictionary=2312_dico.pkl_lock
cache_dir/HULK_None_n_dictionary=2312_dico.pkl_lock_pid-42010_host-fortytwo
cache_dir/HULK_None_n_dictionary=3269_dico.pkl_lock
cache_dir/HULK_None_n_dictionary=3269_dico.pkl_lock_pid-42006_host-fortytwo
cache_dir/HULK_None_n_dictionary=4624_dico.pkl_lock
cache_dir/HULK_None_n_dictionary=4624_dico.pkl_lock_pid-42009_host-fortytwo
cache_dir/HULK_None_n_dictionary=817_dico.pkl_lock
cache_dir/HULK_None_n_dictionary=817_dico.pkl_lock_pid-42005_host-fortytwo
cache_dir/HULK_vanilla_dico.pkl_lock
cache_dir/HULK_vanilla_dico.pkl_lock_pid-3310_host-fortytwo

figure 1: Role of homeostasis in learning sparse representations

TODO: cross-validate over 10 different learning runs

In [8]:
fname = 'figure_map'
N_cv = 10
one_cv = 9 # picking one to display intermediate results

learning

The actual learning is done in a second object (here dico) from which we can access another set of properties and functions (see the shl_learn.py script):
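In essence, each learning step sparsely encodes a batch of patches and then nudges the selected atoms along the residual (a Hebbian rule), renormalizing the atoms to unit energy. A minimal numpy sketch of such an update — with illustrative names and a crude hard-threshold code standing in for the full matching pursuit used by shl_learn.py:

```python
import numpy as np

rng = np.random.default_rng(0)
n_dictionary, n_pixels, n_batch = 16, 25, 100
D = rng.normal(size=(n_dictionary, n_pixels))
D /= np.linalg.norm(D, axis=1, keepdims=True)       # unit-energy atoms
X = rng.normal(size=(n_batch, n_pixels))

def sparse_code(D, X, l0=3):
    # toy coder: keep the l0 strongest coefficients of each sample
    corr = X @ D.T                                   # (n_batch, n_dictionary)
    thresh = -np.sort(-np.abs(corr), axis=1)[:, l0 - 1:l0]
    return np.where(np.abs(corr) >= thresh, corr, 0.)

def learn_step(D, X, eta=.01, l0=3):
    code = sparse_code(D, X, l0)
    residual = X - code @ D                          # reconstruction error
    D = D + (eta / len(X)) * code.T @ residual       # Hebbian update
    return D / np.linalg.norm(D, axis=1, keepdims=True)

err0 = np.linalg.norm(X - sparse_code(D, X) @ D)
for _ in range(100):
    D = learn_step(D, X)
err1 = np.linalg.norm(X - sparse_code(D, X) @ D)     # err1 < err0
```

This is only a sketch of the gradient-descent step; the actual object also tracks the quantities plotted below (error, log-likelihood, activation counts).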

In [9]:
homeo_methods = ['None', 'OLS', 'HEH']

list_figures = ['show_dico', 'time_plot_error', 'time_plot_logL', 'time_plot_MC', 'show_Pcum']
list_figures = []
dico = {}
for i_cv in range(N_cv):
    dico[i_cv] = {}
    for homeo_method in homeo_methods:
        shl = SHL(homeo_method=homeo_method, seed=seed+i_cv, **opts)
        dico[i_cv][homeo_method] = shl.learn_dico(data=data, list_figures=list_figures, matname=tag + '_' + homeo_method + '_seed=' + str(seed+i_cv))
In [10]:
list_figures = ['show_dico']
for i_cv in [one_cv]:
    for homeo_method in homeo_methods:
        print(hl + hs + homeo_method[:3] + hs + hl)
        shl = SHL(homeo_method=homeo_method, seed=seed+i_cv, **opts)
        shl.learn_dico(data=data, list_figures=list_figures, matname=tag + '_' + homeo_method + '_seed=' + str(seed+i_cv))

        print('size of dictionary = (number of filters, size of imagelets) = ', dico[i_cv][homeo_method].dictionary.shape)
        print('average of filters = ',  dico[i_cv][homeo_method].dictionary.mean(axis=1).mean(), 
              '+/-',  dico[i_cv][homeo_method].dictionary.mean(axis=1).std())
        SE = np.sqrt(np.sum(dico[i_cv][homeo_method].dictionary**2, axis=1))
        print('average energy of filters = ', SE.mean(), '+/-', SE.std())
        plt.show()
----------          Non          ----------
size of dictionary = (number of filters, size of imagelets) =  (1156, 441)
average of filters =  1.900059349020534e-05 +/- 0.001070034202528707
average energy of filters =  1.0 +/- 3.3618957751613276e-17
----------          OLS          ----------
size of dictionary = (number of filters, size of imagelets) =  (1156, 441)
average of filters =  -0.0001454346802937167 +/- 0.0005660352600313756
average energy of filters =  1.0 +/- 4.1303923159725894e-17
----------          HEH          ----------
size of dictionary = (number of filters, size of imagelets) =  (1156, 441)
average of filters =  -3.3585129307708163e-06 +/- 0.0009985248463406427
average energy of filters =  1.0 +/- 4.3930950534446806e-17
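For reference, the 'OLS' heuristic is in the spirit of the gain rule of Olshausen's SparseNet: atoms that fire more than average get their gain turned down, so that all atoms keep competing during learning. A toy numpy sketch of such a multiplicative rule — illustrative only, not the exact update used by SHL:

```python
import numpy as np

def ols_gain_update(gain, code, eta_homeo=.01):
    """Multiplicative gain rule: over-active atoms get their gain lowered,
    under-active atoms get it raised (sketch, not the SHL implementation)."""
    activity = np.mean(code**2, axis=0)          # mean energy per atom over the batch
    target = activity.mean()                     # shared target activity
    return gain * (target / (activity + 1e-16))**eta_homeo

rng = np.random.default_rng(3)
scales = np.array([.2, 1., 5.])                  # atoms with very unequal activities
gain = np.ones(3)
for _ in range(500):
    # the gain modulates how strongly each atom is selected by the coder
    code = rng.normal(size=(100, 3)) * scales[None, :] * gain[None, :]
    gain = ols_gain_update(gain, code)

effective = scales * gain                        # activities end up equalized
```

At the fixed point the gains compensate the scale differences, which is the behavior the quantitative comparison below probes.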

panel A: plotting some dictionaries

In [11]:
pname = '/tmp/panel_A' #pname = fname + '_A'
In [12]:
from shl_scripts import show_dico
if DEBUG: show_dico(shl, dico[one_cv][homeo_method], data=data, dim_graph=(2,5))
In [13]:
dim_graph = (2, 9)
colors = ['black', 'orange', 'blue']
homeo_methods
Out[13]:
['None', 'OLS', 'HEH']
In [14]:
%run model.py
tag = HULK
n_jobs = 0
In [15]:
subplotpars = dict(left=0.042, right=1., bottom=0., top=1., wspace=0.05, hspace=0.05,)
fig, axs = plt.subplots(3, 1, figsize=(fig_width/2, fig_width/(1+phi)), gridspec_kw=subplotpars)

for ax, color, homeo_method in zip(axs.ravel(), colors, homeo_methods): 
    ax.axis(c=color, lw=2, axisbg='w')
    ax.set_facecolor('w')
    fig, ax = show_dico(shl, dico[one_cv][homeo_method], data=data, dim_graph=dim_graph, fig=fig, ax=ax)
    # ax.set_ylabel(homeo_method)
    ax.text(-11, 11*dim_graph[0], homeo_method, fontsize=12, color=color, rotation=90)#, backgroundcolor='white'

for ext in FORMATS: fig.savefig(pname + ext, dpi=dpi_export, bbox_inches='tight')
findfont: Font family ['sans-serif'] not found. Falling back to DejaVu Sans.
In [16]:
### TODO: put the p_min and p_max values in the filter map
In [17]:
if DEBUG: Image(pname +'.png')
In [18]:
if DEBUG: help(fig.subplots_adjust)
In [19]:
if DEBUG: help(plt.subplots)
In [20]:
if DEBUG: help(matplotlib.gridspec.GridSpec)

panel B: quantitative comparison

In [21]:
pname = '/tmp/panel_B' #fname + '_B'
In [22]:
from shl_scripts import time_plot
variable = 'F'
alpha_0, alpha = .3, .15
subplotpars = dict(left=0.2, right=.95, bottom=0.2, top=.95)#, wspace=0.05, hspace=0.05,)
fig, ax = plt.subplots(1, 1, figsize=(fig_width/2, fig_width/(1+phi)), gridspec_kw=subplotpars)
for i_cv in range(N_cv):
    for color, homeo_method in zip(colors, homeo_methods): 
        ax.axis(c='b', lw=2, axisbg='w')
        ax.set_facecolor('w')
        if i_cv==0:
            fig, ax = time_plot(shl, dico[i_cv][homeo_method], variable=variable, unit='bits', color=color, label=homeo_method, alpha=alpha_0, fig=fig, ax=ax)
        else:
            fig, ax = time_plot(shl, dico[i_cv][homeo_method], variable=variable, unit='bits', color=color, alpha=alpha, fig=fig, ax=ax)        
        # ax.set_ylabel(homeo_method)
        #ax.text(-8, 7*dim_graph[0], homeo_method, fontsize=12, color='k', rotation=90)#, backgroundcolor='white'
ax.legend(loc='best')
for ext in FORMATS: fig.savefig(pname + ext, dpi=dpi_export, bbox_inches='tight')
if DEBUG: Image(pname +'.png')
findfont: Font family ['sans-serif'] not found. Falling back to DejaVu Sans.

Montage of the subplots

In [23]:
import tikzmagic
In [24]:
%load_ext tikzmagic
In [25]:
#DEBUG = True
if DEBUG: help(tikzmagic)
%tikz \draw (0,0) rectangle (1,1);
In [26]:
%%tikz -f pdf --save {fname}.pdf
\draw[white, fill=white] (0.\linewidth,0) rectangle (1.\linewidth, .382\linewidth) ;
\draw [anchor=north west] (.0\linewidth, .382\linewidth) node {\includegraphics[width=.5\linewidth]{/tmp/panel_A}};
\draw [anchor=north west] (.5\linewidth, .382\linewidth) node {\includegraphics[width=.5\linewidth]{/tmp/panel_B}};
\begin{scope}[font=\bf\sffamily\large]
\draw [anchor=west,fill=white] (.0\linewidth, .382\linewidth) node [above right=-3mm] {$\mathsf{A}$};
\draw [anchor=west,fill=white] (.53\linewidth, .382\linewidth) node [above right=-3mm] {$\mathsf{B}$};
\end{scope}
In [27]:
!convert  -density {dpi_export} {fname}.pdf {fname}.jpg
!convert  -density {dpi_export} {fname}.pdf {fname}.png
#!convert  -density {dpi_export} -resize 5400  -units pixelsperinch -flatten  -compress lzw  -depth 8 {fname}.pdf {fname}.tiff
Image(fname +'.png')
Out[27]:
!echo "width=" ; convert {fname}.tiff -format "%[fx:w]" info:
!echo ", \nheight=" ; convert {fname}.tiff -format "%[fx:h]" info:
!echo ", \nunit=" ; convert {fname}.tiff -format "%U" info:
!identify {fname}.tiff

figure 2: Histogram Equalization Homeostasis

In [28]:
fname = 'figure_HEH'

First collecting data:

In [29]:
list_figures = ['show_Pcum']

dico = {}
for homeo_method in homeo_methods:
    print(hl + hs + homeo_method + hs + hl)
    shl = SHL(homeo_method=homeo_method, **opts)
    #dico[homeo_method] = shl.learn_dico(data=data, list_figures=list_figures, matname=tag + '_' + homeo_method + '_' + str(one_cv))
    dico[homeo_method] = shl.learn_dico(data=data, list_figures=list_figures, matname=tag + '_' + homeo_method + '_seed=' + str(seed+one_cv))
    plt.show()
----------          None          ----------
----------          OLS          ----------
----------          HEH          ----------
----------          HAP          ----------
----------          EMP          ----------
In [30]:
dico[homeo_method].P_cum.shape
Out[30]:
(1156, 128)
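`P_cum` stores, for each of the 1156 atoms, the cumulative distribution of its coefficients sampled at 128 points; Histogram Equalization Homeostasis then passes each raw coefficient through its atom's cdf, so that all atoms compete on a common probabilistic scale regardless of their raw coefficient magnitudes. A minimal sketch of this mechanism on synthetic coefficients (all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n_dictionary, n_samples, n_quant = 4, 10000, 128

# atoms with very different coefficient scales, as before homeostasis kicks in
coeffs = np.abs(rng.normal(size=(n_dictionary, n_samples))) * np.array([[.1], [.5], [1.], [5.]])

# build P_cum: per-atom empirical cdf sampled on a per-atom quantization grid
grids = np.array([np.linspace(0, coeffs[k].max(), n_quant) for k in range(n_dictionary)])
P_cum = np.array([np.mean(coeffs[k][:, None] <= grids[k][None, :], axis=0)
                  for k in range(n_dictionary)])

def z_score(k, c):
    # histogram equalization: map a raw coefficient to its quantile in [0, 1]
    return np.interp(c, grids[k], P_cum[k])

# after equalization, the median coefficient of every atom maps near .5,
# whatever its raw scale
medians = [z_score(k, np.median(coeffs[k])) for k in range(n_dictionary)]
```

The panels below plot these per-atom non-linear functions and show how the different heuristics shape them.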

panel A: different P_cum

In [31]:
pname = '/tmp/panel_A' #pname = fname + '_A'

from shl_scripts import plot_P_cum
variable = 'F'
subplotpars = dict(left=0.2, right=.95, bottom=0.2, top=.95)#, wspace=0.05, hspace=0.05,)
fig, ax = plt.subplots(1, 1, figsize=(fig_width/2, fig_width/(1+phi)), gridspec_kw=subplotpars)
for color, homeo_method in zip(colors, homeo_methods): 
    ax.axis(c='b', lw=2, axisbg='w')
    ax.set_facecolor('w')
    fig, ax = plot_P_cum(dico[homeo_method].P_cum, ymin=0.96, ymax=1.001, 
                         title=None, suptitle=None, ylabel='non-linear functions', 
                         verbose=False, n_yticks=21, alpha=.02, c=color, fig=fig, ax=ax)
    ax.plot([0], [0], lw=1, color=color, label=homeo_method, alpha=.6)
    # ax.set_ylabel(homeo_method)
    #ax.text(-8, 7*dim_graph[0], homeo_method, fontsize=12, color='k', rotation=90)#, backgroundcolor='white'
ax.legend(loc='lower right')
for ext in FORMATS: fig.savefig(pname + ext, dpi=dpi_export, bbox_inches='tight')
if DEBUG: Image(pname +'.png')
In [32]:
if DEBUG: help(fig.legend)

panel B: comparing the effects of parameters

In [33]:
pname = '/tmp/panel_B' #fname + '_B'
n_jobs = 1

from shl_scripts.shl_experiments import SHL_set
homeo_methods = ['None', 'OLS', 'HEH']
variables = ['eta', 'eta_homeo']

list_figures = []

for homeo_method in homeo_methods:
    opts_ = opts.copy()
    opts_.update(homeo_method=homeo_method)
    experiments = SHL_set(opts_, tag=tag + '_' + homeo_method, base=10)
    experiments.run(variables=variables, n_jobs=n_jobs, verbose=0)

import matplotlib.pyplot as plt
subplotpars = dict(left=0.2, right=.95, bottom=0.2, top=.95, wspace=0.5, hspace=0.35,)

x, y = .05, .8 #-.3

fig, axs = plt.subplots(len(variables), 1, figsize=(fig_width/2, fig_width/(1+phi)), gridspec_kw=subplotpars, sharey=True)

for i_ax, variable in enumerate(variables):
    for color, homeo_method in zip(colors, homeo_methods): 
        opts_ = opts.copy()
        opts_.update(homeo_method=homeo_method)
        experiments = SHL_set(opts_, tag=tag + '_' + homeo_method, base=10)
        fig, axs[i_ax] = experiments.scan(variable=variable, list_figures=[], display='final', fig=fig, ax=axs[i_ax], color=color, display_variable='F', verbose=0) #, label=homeo_method
        axs[i_ax].set_xlabel('') #variable
        axs[i_ax].text(x, y,  variable, transform=axs[i_ax].transAxes) 
        #axs[i_ax].get_xaxis().set_major_formatter(matplotlib.ticker.ScalarFormatter())

#fig.legend(loc='lower right')
for ext in FORMATS: fig.savefig(pname + ext, dpi=dpi_export, bbox_inches='tight')
if DEBUG: Image(pname +'.png')

Montage of the subplots

In [34]:
%%tikz -f pdf --save {fname}.pdf
\draw[white, fill=white] (0.\linewidth,0) rectangle (1.\linewidth, .382\linewidth) ;
\draw [anchor=north west] (.0\linewidth, .382\linewidth) node {\includegraphics[width=.5\linewidth]{/tmp/panel_A.pdf}};
\draw [anchor=north west] (.5\linewidth, .382\linewidth) node {\includegraphics[width=.5\linewidth]{/tmp/panel_B.pdf}};
\begin{scope}[font=\bf\sffamily\large]
\draw [anchor=west,fill=white] (.0\linewidth, .382\linewidth) node [above right=-3mm] {$\mathsf{A}$};
\draw [anchor=west,fill=white] (.53\linewidth, .382\linewidth) node [above right=-3mm] {$\mathsf{B}$};
\end{scope}
In [35]:
!convert  -density {dpi_export} {fname}.pdf {fname}.jpg
!convert  -density {dpi_export} {fname}.pdf {fname}.png
#!convert  -density {dpi_export} -resize 5400  -units pixelsperinch -flatten  -compress lzw  -depth 8 {fname}.pdf {fname}.tiff
Image(fname +'.png')
Out[35]:
!echo "width=" ; convert {fname}.tiff -format "%[fx:w]" info:
!echo ", \nheight=" ; convert {fname}.tiff -format "%[fx:h]" info:
!echo ", \nunit=" ; convert {fname}.tiff -format "%U" info:
!identify {fname}.tiff

figure 3

learning

In [36]:
fname = 'figure_HAP'
In [37]:
colors = ['orange', 'blue', 'red', 'green']
homeo_methods = ['OLS', 'HEH', 'EMP', 'HAP']
list_figures = []
dico = {}
for i_cv in range(N_cv):
    dico[i_cv] = {}
    for homeo_method in homeo_methods:
        shl = SHL(homeo_method=homeo_method, seed=seed+i_cv, **opts)
        dico[i_cv][homeo_method] = shl.learn_dico(data=data, list_figures=list_figures, matname=tag + '_' + homeo_method + '_seed=' + str(seed+i_cv))

list_figures = ['show_dico'] if DEBUG else []
for i_cv in [one_cv]:
    for homeo_method in homeo_methods:
        print(hl + hs + homeo_method + hs + hl)
        shl = SHL(homeo_method=homeo_method, seed=seed+i_cv, **opts)
        shl.learn_dico(data=data, list_figures=list_figures, matname=tag + '_' + homeo_method + '_seed=' + str(seed+i_cv))
        plt.show()
        print('size of dictionary = (number of filters, size of imagelets) = ', dico[i_cv][homeo_method].dictionary.shape)
        print('average of filters = ',  dico[i_cv][homeo_method].dictionary.mean(axis=1).mean(), 
              '+/-',  dico[i_cv][homeo_method].dictionary.mean(axis=1).std())
        SE = np.sqrt(np.sum(dico[i_cv][homeo_method].dictionary**2, axis=1))
        print('average energy of filters = ', SE.mean(), '+/-', SE.std())
----------          OLS          ----------
size of dictionary = (number of filters, size of imagelets) =  (1156, 441)
average of filters =  -0.0001454346802937167 +/- 0.0005660352600313756
average energy of filters =  1.0 +/- 4.1303923159725894e-17
----------          HEH          ----------
size of dictionary = (number of filters, size of imagelets) =  (1156, 441)
average of filters =  -3.3585129307708163e-06 +/- 0.0009985248463406427
average energy of filters =  1.0 +/- 4.3930950534446806e-17
----------          EMP          ----------
size of dictionary = (number of filters, size of imagelets) =  (1156, 441)
average of filters =  -2.17993162670573e-05 +/- 0.001060414472941883
average energy of filters =  1.0 +/- 3.4557341446777685e-17
----------          HAP          ----------
size of dictionary = (number of filters, size of imagelets) =  (1156, 441)
average of filters =  2.233321031657867e-05 +/- 0.0009511072012279204
average energy of filters =  1.0 +/- 3.9992351632662305e-17

panel A: plotting some dictionaries

In [38]:
pname = '/tmp/panel_A' #pname = fname + '_A'
In [39]:
subplotpars = dict( left=0.042, right=1., bottom=0., top=1., wspace=0.05, hspace=0.05,)
fig, axs = plt.subplots(3, 1, figsize=(fig_width/2, fig_width/(1+phi)), gridspec_kw=subplotpars)

for ax, color, homeo_method in zip(axs.ravel(), colors[1:], homeo_methods[1:]): 
    ax.axis(c=color, lw=2, axisbg='w')
    ax.set_facecolor('w')
    from shl_scripts import show_dico
    fig, ax = show_dico(shl, dico[one_cv][homeo_method], data=data, dim_graph=dim_graph, fig=fig, ax=ax)
    # ax.set_ylabel(homeo_method)
    ax.text(-8, 7*dim_graph[0], homeo_method, fontsize=12, color=color, rotation=90)#, backgroundcolor='white'

for ext in FORMATS: fig.savefig(pname + ext, dpi=dpi_export, bbox_inches='tight')
---------------------------------------------------------------------------
KeyboardInterrupt                         Traceback (most recent call last)
<ipython-input-39-01d985131385> in <module>
      7     ax.set_facecolor('w')
      8     from shl_scripts import show_dico
----> 9     fig, ax = show_dico(shl, dico[one_cv][homeo_method], data=data, dim_graph=dim_graph, fig=fig, ax=ax)
     10     # ax.set_ylabel(homeo_method)
     11     ax.text(-8, 7*dim_graph[0], homeo_method, fontsize=12, color=color, rotation=90)#, backgroundcolor='white'

~/science/HULK/SparseHebbianLearning/shl_scripts/shl_tools.py in show_dico(shl_exp, dico, data, order, title, dim_graph, seed, do_tiles, fname, fig, ax, do_show_proba, **kwargs)
    379 
    380     if order == 'minmax':
--> 381         sparse_code = shl_exp.code(data=data, dico=dico, P_cum=shl_exp.P_cum)#, gain=shl_exp.gain)
    382         if True:
    383             # order by activation probability

~/science/HULK/SparseHebbianLearning/shl_scripts/shl_experiments.py in code(self, data, dico, coding_algorithm, matname, P_cum, fit_tol, l0_sparseness, gain)
    175                                         algorithm=self.learning_algorithm,
    176                                         P_cum=P_cum, do_sym=self.do_sym, verbose=0,
--> 177                                         gain=gain)
    178 
    179         else:

~/science/HULK/SparseHebbianLearning/shl_scripts/shl_encode.py in sparse_encode(X, dictionary, precision, algorithm, fit_tol, P_cum, l0_sparseness, C, do_sym, verbose, gain, alpha_MP)
    125         sparse_code = mp(X, dictionary, precision, l0_sparseness=l0_sparseness,
    126                          fit_tol=fit_tol, P_cum=P_cum, C=C, do_sym=do_sym,
--> 127                          verbose=verbose, gain=gain, alpha_MP=alpha_MP)
    128     else:
    129         raise ValueError('Sparse coding method must be "mp", "lasso_lars" '

~/science/HULK/SparseHebbianLearning/shl_scripts/shl_encode.py in mp(X, dictionary, precision, l0_sparseness, fit_tol, alpha_MP, do_sym, P_cum, do_fast, C, verbose, gain)
    300             # else:
    301             # scaled ReLu on the correlation coefficient
--> 302             q = rectify(corr / squared_norm[np.newaxis, :], do_sym=do_sym) * gain  # size (K, N)
    303 
    304             ind = np.argmax(q, axis=1) # size (K,)

~/science/HULK/SparseHebbianLearning/shl_scripts/shl_encode.py in rectify(code, do_sym, verbose)
    143     else:
    144         # ReLU
--> 145         return code*(code>0)
    146 
    147 

KeyboardInterrupt: 
Error in callback <function flush_figures at 0x10feedd90> (for post_execute):
KeyboardInterrupt

panel B: quantitative comparison

In [ ]:
pname = '/tmp/panel_B' #fname + '_B'
In [ ]:
from shl_scripts import time_plot
variable = 'F'
alpha_0, alpha = .3, .15
subplotpars = dict(left=0.2, right=.95, bottom=0.2, top=.95)#, wspace=0.05, hspace=0.05,)
fig, ax = plt.subplots(1, 1, figsize=(fig_width/2, fig_width/(1+phi)), gridspec_kw=subplotpars)
for i_cv in range(N_cv):
    for color, homeo_method in zip(colors, homeo_methods): 
        ax.axis(c='b', lw=2, axisbg='w')
        ax.set_facecolor('w')
        if i_cv==0:
            fig, ax = time_plot(shl, dico[i_cv][homeo_method], variable=variable, unit='bits', color=color, label=homeo_method, alpha=alpha_0, fig=fig, ax=ax)
        else:
            fig, ax = time_plot(shl, dico[i_cv][homeo_method], variable=variable, unit='bits', color=color, alpha=alpha, fig=fig, ax=ax)        
ax.legend(loc='best')
for ext in FORMATS: fig.savefig(pname + ext, dpi=dpi_export, bbox_inches='tight')
if DEBUG: Image(pname +'.png')    
In [ ]:
if DEBUG: Image(pname +'.png')

Montage of the subplots

In [ ]:
%%tikz -f pdf --save {fname}.pdf
\draw[white, fill=white] (0.\linewidth,0) rectangle (1.\linewidth, .382\linewidth) ;
\draw [anchor=north west] (.0\linewidth, .382\linewidth) node {\includegraphics[width=.5\linewidth]{/tmp/panel_A}};
\draw [anchor=north west] (.5\linewidth, .382\linewidth) node {\includegraphics[width=.5\linewidth]{/tmp/panel_B}};
\begin{scope}[font=\bf\sffamily\large]
\draw [anchor=west,fill=white] (.0\linewidth, .382\linewidth) node [above right=-3mm] {$\mathsf{A}$};
\draw [anchor=west,fill=white] (.53\linewidth, .382\linewidth) node [above right=-3mm] {$\mathsf{B}$};
\end{scope}
In [ ]:
!convert  -density {dpi_export} {fname}.pdf {fname}.jpg
!convert  -density {dpi_export} {fname}.pdf {fname}.png
#!convert  -density {dpi_export} -resize 5400  -units pixelsperinch -flatten  -compress lzw  -depth 8 {fname}.pdf {fname}.tiff
Image(fname +'.png')
!echo "width=" ; convert {fname}.tiff -format "%[fx:w]" info:
!echo ", \nheight=" ; convert {fname}.tiff -format "%[fx:h]" info:
!echo ", \nunit=" ; convert {fname}.tiff -format "%U" info:
!identify {fname}.tiff

figure 4: Convolutional Neural Network

In [ ]:
fname = 'figure_CNN'
In [ ]:
!rm -fr /tmp/database/Face_DataBase
!mkdir -p /tmp/database && rsync -a "/Users/laurentperrinet/science/VB_These/Rapport d'avancement/database/Face_DataBase" /tmp/database/
#!mkdir -p /tmp/database/ && rsync -a "/Users/laurentperrinet/science/VB_These/Rapport d'avancement/database/Face_DataBase/Raw_DataBase/*" /tmp/database/Face_DataBase
In [ ]:
from CHAMP.DataLoader import LoadData
from CHAMP.DataTools import LocalContrastNormalization, FilterInputData, GenerateMask
from CHAMP.Monitor import DisplayDico, DisplayConvergenceCHAMP, DisplayWhere

import os
datapath = os.path.join("/tmp", "database")
path = os.path.join(datapath, "Face_DataBase")
TrSet, TeSet = LoadData('Face', path, decorrelate=False, resize=(65, 65))

# MP Parameters
nb_dico = 20
width = 9
dico_size = (width, width)
l0 = 20
seed = 42
# Learning Parameters
eta = .05
nb_epoch = 500

TrSet, TeSet = LoadData('Face', path, decorrelate=False, resize=(65, 65))
N_TrSet, _, _, _ = LocalContrastNormalization(TrSet)
Filtered_L_TrSet = FilterInputData(
    N_TrSet, sigma=0.25, style='Custom', start_R=15)

mask = GenerateMask(full_size=(nb_dico, 1, width, width), sigma=0.8, style='Gaussian')

from CHAMP.CHAMP_Layer import CHAMP_Layer

from CHAMP.DataTools import SaveNetwork, LoadNetwork
homeo_methods = ['None', 'HAP']

for homeo_method, eta_homeo  in zip(homeo_methods, [0., 0.0025]):
    ffname = 'cache_dir_CNN/CHAMP_low_' + homeo_method + '.pkl'
    try:
        L1_mask = LoadNetwork(loading_path=ffname)
    except Exception:
        L1_mask = CHAMP_Layer(l0_sparseness=l0, nb_dico=nb_dico,
                          dico_size=dico_size, mask=mask, verbose=1)
        dico_mask = L1_mask.TrainLayer(
            Filtered_L_TrSet, eta=eta, eta_homeo=eta_homeo, nb_epoch=nb_epoch, seed=seed)
        SaveNetwork(Network=L1_mask, saving_path=ffname)

panel A: plotting some dictionaries

In [ ]:
pname = '/tmp/panel_A' #pname = fname + '_A'
subplotpars = dict( left=0.042, right=1., bottom=0., top=1., wspace=0.05, hspace=0.05,)
fig, axs = plt.subplots(2, 1, figsize=(fig_width/2, fig_width/(1+phi)), gridspec_kw=subplotpars)

for ax, color, homeo_method in zip(axs.ravel(), ['black', 'green'], homeo_methods):
    ax.axis(c=color, lw=2, axisbg='w')
    ax.set_facecolor('w')
    ffname = 'cache_dir_CNN/CHAMP_low_' + homeo_method + '.pkl'
    L1_mask = LoadNetwork(loading_path=ffname)
    fig, ax = DisplayDico(L1_mask.dictionary, fig=fig, ax=ax)
    # ax.set_ylabel(homeo_method)
    ax.text(-8, 7*dim_graph[0], homeo_method, fontsize=12, color=color, rotation=90)#, backgroundcolor='white'

for ext in FORMATS: fig.savefig(pname + ext, dpi=dpi_export, bbox_inches='tight')
In [ ]:
subplotpars = dict(left=0.042, right=1., bottom=0., top=1., wspace=0.05, hspace=0.05,)

for color, homeo_method in zip(['black', 'green'], homeo_methods): 
    #fig, axs = plt.subplots(1, 1, figsize=(fig_width/2, fig_width/(1+phi)), gridspec_kw=subplotpars)
    ffname = 'cache_dir_CNN/CHAMP_low_' + homeo_method + '.pkl'
    L1_mask = LoadNetwork(loading_path=ffname)
    fig, ax = DisplayDico(L1_mask.dictionary)
    # ax.set_ylabel(homeo_method)
    #for ax in list(axs):
    #    ax.axis(c=color, lw=2, axisbg='w')
    #    ax.set_facecolor('w')
    ax[0].text(-4, 3, homeo_method, fontsize=8, color=color, rotation=90)#, backgroundcolor='white'
    plt.tight_layout( pad=0., w_pad=0., h_pad=.0)


    for ext in FORMATS: fig.savefig(pname + '_' + homeo_method + ext, dpi=dpi_export, bbox_inches='tight')

panel B: quantitative comparison

In [ ]:
pname = '/tmp/panel_B' #fname + '_B'
from shl_scripts import time_plot
variable = 'F'
alpha = .3
subplotpars = dict(left=0.2, right=.95, bottom=0.2, top=.95)#, wspace=0.05, hspace=0.05,)
fig, axs = plt.subplots(2, 1, figsize=(fig_width/2, fig_width/(1+phi)), gridspec_kw=subplotpars)
for ax, color, homeo_method in zip(axs, ['black', 'green'], homeo_methods):
    print(ax, axs)
    ffname = 'cache_dir_CNN/CHAMP_low_' + homeo_method + '.pkl'
    L1_mask = LoadNetwork(loading_path=ffname)
    fig, ax = DisplayConvergenceCHAMP(L1_mask, to_display=['histo'], fig=fig, ax=ax)
    ax.axis(c=color, lw=2, axisbg='w')
    ax.set_facecolor('w')
    # ax.set_ylabel(homeo_method)
    #ax.text(-8, 7*dim_graph[0], homeo_method, fontsize=12, color=color, rotation=90)#, backgroundcolor='white'
for ext in FORMATS: fig.savefig(pname + ext, dpi=dpi_export, bbox_inches='tight')
if DEBUG: Image(pname +'.png')
In [ ]:
from shl_scripts import time_plot
variable = 'F'
alpha = .3
subplotpars = dict(left=0.2, right=.95, bottom=0.2, top=.95)#, wspace=0.05, hspace=0.05,)

for color, homeo_method in zip(['black', 'green'], homeo_methods): 
    #fig, axs = plt.subplots(1, 1, figsize=(fig_width/2, fig_width/(1+phi)), gridspec_kw=subplotpars)
    ffname = 'cache_dir_CNN/CHAMP_low_' + homeo_method + '.pkl'
    L1_mask = LoadNetwork(loading_path=ffname)
    fig, ax = DisplayConvergenceCHAMP(L1_mask, to_display=['histo'], color=color)
    ax.axis(c=color, lw=2, axisbg='w')
    ax.set_facecolor('w')
    ax.set_ylabel('counts')
    ax.set_xlabel('feature #')
    ax.set_ylim(0, 560)
    #ax.text(-8, 7*dim_graph[0], homeo_method, fontsize=12, color=color, rotation=90)#, backgroundcolor='white'
    #ax[0].text(-8, 3, homeo_method, fontsize=12, color=color, rotation=90)#, backgroundcolor='white'
    
    for ext in FORMATS: fig.savefig(pname + '_' + homeo_method + ext, dpi=dpi_export, bbox_inches='tight')
    if DEBUG: Image(pname +'.png')    

Montage of the subplots

In [ ]:
%ls -ltr /tmp/panel_*
In [ ]:
%%tikz -f pdf --save {fname}.pdf
\draw[white, fill=white] (0.\linewidth,0) rectangle (1.\linewidth, .382\linewidth) ;
\draw [anchor=north west] (.0\linewidth, .375\linewidth) node {\includegraphics[width=.95\linewidth]{/tmp/panel_A_None}};
\draw [anchor=north west] (.0\linewidth, .300\linewidth) node {\includegraphics[width=.95\linewidth]{/tmp/panel_A_HAP}};
\draw [anchor=north west] (.0\linewidth, .191\linewidth) node {\includegraphics[width=.45\linewidth]{/tmp/panel_B_None}};
\draw [anchor=north west] (.5\linewidth, .191\linewidth) node {\includegraphics[width=.45\linewidth]{/tmp/panel_B_HAP}};
\begin{scope}[font=\bf\sffamily\large]
%\draw [anchor=west,fill=white] (.0\linewidth, .382\linewidth) node [above right=-3mm] {$\mathsf{A}$};
\draw [anchor=west,fill=white] (.0\linewidth, .191\linewidth) node [above right=-3mm] {$\mathsf{A}$};
\draw [anchor=west,fill=white] (.53\linewidth, .191\linewidth) node [above right=-3mm] {$\mathsf{B}$};
\end{scope}
In [ ]:
!convert  -density {dpi_export} {fname}.pdf {fname}.jpg
!convert  -density {dpi_export} {fname}.pdf {fname}.png
#!convert  -density {dpi_export} -resize 5400  -units pixelsperinch -flatten  -compress lzw  -depth 8 {fname}.pdf {fname}.tiff
Image(fname +'.png')
!echo "width=" ; convert {fname}.tiff -format "%[fx:w]" info:
!echo ", \nheight=" ; convert {fname}.tiff -format "%[fx:h]" info:
!echo ", \nunit=" ; convert {fname}.tiff -format "%U" info:
!identify {fname}.tiff

coding

The learning itself is done via gradient descent, but it depends strongly on the coding / decoding algorithm. This is handled by another function (in the shl_encode.py script).
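The default coder is matching pursuit (the mp routine of shl_encode.py): iteratively pick the atom with the strongest correlation to the residual, subtract its contribution, and repeat l0_sparseness times. A minimal single-sample sketch — omitting the rectification and the gain / P_cum homeostatic modulation that the real routine applies when picking the winner:

```python
import numpy as np

def matching_pursuit(x, dictionary, l0_sparseness=10):
    """Greedy MP sketch: returns a sparse code such that code @ dictionary ~ x."""
    code = np.zeros(dictionary.shape[0])
    residual = x.copy()
    for _ in range(l0_sparseness):
        corr = dictionary @ residual          # correlation with each (unit-norm) atom
        ind = np.argmax(np.abs(corr))         # best-matching atom
        code[ind] += corr[ind]                # accumulate its coefficient
        residual -= corr[ind] * dictionary[ind]  # explain away its contribution
    return code

rng = np.random.default_rng(2)
n_dictionary, n_pixels = 64, 25
D = rng.normal(size=(n_dictionary, n_pixels))
D /= np.linalg.norm(D, axis=1, keepdims=True)
x = rng.normal(size=n_pixels)

code = matching_pursuit(x, D, l0_sparseness=5)
err = np.linalg.norm(x - code @ D)            # strictly below the norm of x
```

Each iteration can only shrink the residual, which is why the number of MP steps (l0_sparseness) directly trades reconstruction quality against sparsity in the controls below.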

Supplementary controls

starting a learning

In [ ]:
shl = SHL(**opts)
list_figures = ['show_dico', 'show_Pcum', 'time_plot_F']
dico = shl.learn_dico(data=data, list_figures=list_figures, matname=tag + '_vanilla')
In [ ]:
print('size of dictionary = (number of filters, size of imagelets) = ', dico.dictionary.shape)
print('average of filters = ',  dico.dictionary.mean(axis=1).mean(), 
      '+/-',  dico.dictionary.mean(axis=1).std())
SE = np.sqrt(np.sum(dico.dictionary**2, axis=1))
print('average energy of filters = ', SE.mean(), '+/-', SE.std())

getting help

In [ ]:
help(shl)
In [ ]:
help(dico)

loading a database

Loading patches, with or without mask:

In [ ]:
N_patches = 12
from shl_scripts.shl_tools import show_data
opts_ = opts.copy()
opts_.update(verbose=0)
for i, (do_mask, label) in enumerate(zip([False, True], ['Without mask', 'With mask'])):
    data_ = SHL(DEBUG_DOWNSCALE=1, N_patches=N_patches, n_image=1, do_mask=do_mask, seed=seed, **opts_).get_data()
    fig, axs = show_data(data_)
    axs[0].set_ylabel(label);
    plt.show()

Testing different algorithms

In [ ]:
fig, ax = None, None

for homeo_method in ['None', 'HAP']:
    for algorithm in ['lasso_lars', 'lars', 'elastic', 'omp', 'mp']: # 'threshold', 'lasso_cd', 
        opts_ = opts.copy()
        opts_.update(homeo_method=homeo_method, learning_algorithm=algorithm, verbose=0)
        shl = SHL(**opts_)
        dico = shl.learn_dico(data=data, list_figures=[],
                              matname=tag + ' - algorithm={}'.format(algorithm) + ' - homeo_method={}'.format(homeo_method))
        fig, ax = shl.time_plot(dico, variable='F', fig=fig, ax=ax, label=algorithm +'_' + homeo_method)

    ax.legend()
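For reference, the core of plain matching pursuit (the 'mp' option above) can be sketched as follows; this is a minimal version, not the shl_scripts implementation:

```python
import numpy as np

def matching_pursuit(x, D, l0_sparseness):
    """Greedy MP: repeatedly pick the atom best correlated with the residual."""
    code = np.zeros(D.shape[0])
    residual = x.copy()
    for _ in range(l0_sparseness):
        corr = D @ residual                  # atoms are assumed L2-normalized
        i = int(np.argmax(np.abs(corr)))
        code[i] += corr[i]                   # accumulate the coefficient
        residual -= corr[i] * D[i]           # subtract the selected component
    return code, residual

rng = np.random.default_rng(1)
D = rng.normal(size=(32, 64))
D /= np.linalg.norm(D, axis=1, keepdims=True)
x = rng.normal(size=64)
code, residual = matching_pursuit(x, D, l0_sparseness=10)
assert np.count_nonzero(code) <= 10          # at most l0 coefficients are active
assert np.linalg.norm(residual) < np.linalg.norm(x)
```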

Testing two different dictionary initialization strategies

White Noise Initialization + Learning

In [ ]:
shl = SHL(one_over_F=False, **opts)
dico_w = shl.learn_dico(data=data, matname=tag + '_WHITE', list_figures=[])
shl = SHL(one_over_F=True, **opts)
dico_1oF = shl.learn_dico(data=data, matname=tag + '_OVF', list_figures=[])
fig_error, ax_error = None, None
fig_error, ax_error = shl.time_plot(dico_w, variable='F', fig=fig_error, ax=ax_error, color='blue', label='white noise')
fig_error, ax_error = shl.time_plot(dico_1oF, variable='F', fig=fig_error, ax=ax_error, color='red', label='one over f')
#ax_error.set_ylim((0, .65))
ax_error.legend(loc='best')
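The one_over_F option draws the initial filters from noise with a 1/f amplitude spectrum, closer to the statistics of natural images than white noise. One way to synthesize such a patch (an illustration, not the SHL code):

```python
import numpy as np

def one_over_f_patch(width, rng):
    """Random-phase noise with a 1/f amplitude spectrum."""
    fx = np.fft.fftfreq(width)[:, None]
    fy = np.fft.fftfreq(width)[None, :]
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1.0                            # avoid division by zero at DC
    spectrum = rng.normal(size=(width, width)) + 1j * rng.normal(size=(width, width))
    patch = np.real(np.fft.ifft2(spectrum / f))
    return patch - patch.mean()              # remove the mean (DC) component

rng = np.random.default_rng(2)
pink = one_over_f_patch(21, rng)             # energy concentrates at low frequencies
white = rng.normal(size=(21, 21))            # flat spectrum, for comparison
assert pink.shape == white.shape
```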

Testing two different learning rate strategies

By default, we use the ADAM strategy; see https://arxiv.org/pdf/1412.6980.pdf
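ADAM keeps running estimates of the first and second moments of the gradient; setting beta1=0 removes the momentum term. A generic sketch of the update rule, checked on a toy quadratic (not the shl_scripts implementation):

```python
import numpy as np

def adam_step(theta, g, m, v, t, eta=0.05, beta1=0.9, beta2=0.999, eps=1e-8):
    """One ADAM update from bias-corrected moment estimates of the gradient g."""
    m = beta1 * m + (1 - beta1) * g          # first moment (momentum); beta1=0 disables it
    v = beta2 * v + (1 - beta2) * g**2       # second moment (per-coordinate scale)
    m_hat = m / (1 - beta1**t)               # bias corrections
    v_hat = v / (1 - beta2**t)
    return theta - eta * m_hat / (np.sqrt(v_hat) + eps), m, v

# toy check: minimize ||theta||^2
theta, m, v = np.ones(5), np.zeros(5), np.zeros(5)
for t in range(1, 2001):
    g = 2 * theta                            # gradient of ||theta||^2
    theta, m, v = adam_step(theta, g, m, v, t)
assert np.linalg.norm(theta) < 0.5           # far below the starting norm sqrt(5)
```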

In [ ]:
shl = SHL(beta1=0., **opts)
dico_fixed = shl.learn_dico(data=data, matname=tag + '_fixed', list_figures=[])
shl = SHL(**opts)
dico_default = shl.learn_dico(data=data, matname=tag + '_default', list_figures=[])
fig_error, ax_error = None, None
fig_error, ax_error = shl.time_plot(dico_fixed, variable='F', fig=fig_error, ax=ax_error, color='blue', label='fixed')
fig_error, ax_error = shl.time_plot(dico_default, variable='F', fig=fig_error, ax=ax_error, color='red', label='ADAM')
#ax_error.set_ylim((0, .65))
ax_error.legend(loc='best')

Testing different number of neurons and sparsity

As suggested by AnonReviewer3, we tested how convergence is affected by the number of neurons: for different dictionary sizes, we obtain the same convergence behavior as in the figures of the main text. We also checked that this result holds over a range of sparsity levels; in general, convergence takes progressively longer as the l0_sparseness parameter increases. Importantly, in both cases the outcome does not depend on the homeostasis heuristic chosen, supporting the generality of our results with respect to the parameters of the network.

In [ ]:
from shl_scripts.shl_experiments import SHL_set
#homeo_methods = ['None', 'OLS', 'HEH']
homeo_methods = ['None', 'EMP', 'HAP', 'HEH', 'OLS']

variables = ['l0_sparseness', 'n_dictionary']
list_figures = []

#n_dictionary=21**2

for homeo_method in homeo_methods:
    opts_ = opts.copy()
    opts_.update(homeo_method=homeo_method, datapath=datapath)
    experiments = SHL_set(opts_, tag=tag + '_' + homeo_method)
    experiments.run(variables=variables, n_jobs=1, verbose=0)

fig, axs = plt.subplots(len(variables), 1, figsize=(fig_width/2, fig_width/(1+phi)), gridspec_kw=subplotpars, sharey=True)

for i_ax, variable in enumerate(variables):
    for color, homeo_method in zip(colors, homeo_methods): 
        opts_ = opts.copy()
        opts_.update(homeo_method=homeo_method, datapath=datapath)
        experiments = SHL_set(opts_, tag=tag + '_' + homeo_method)
        fig, axs[i_ax] = experiments.scan(variable=variable, list_figures=[], display='final', fig=fig, ax=axs[i_ax], color=color, display_variable='F', verbose=0) #, label=homeo_metho
        axs[i_ax].set_xlabel('') #variable
        axs[i_ax].text(.1, .8,  variable, transform=axs[i_ax].transAxes) 
        #axs[i_ax].get_xaxis().set_major_formatter(matplotlib.ticker.ScalarFormatter())

Perspectives

Convolutional neural networks

In [ ]:
from CHAMP.DataLoader import LoadData
from CHAMP.DataTools import LocalContrastNormalization, FilterInputData, GenerateMask
from CHAMP.Monitor import DisplayDico, DisplayConvergenceCHAMP, DisplayWhere

import os
home = os.getenv('HOME')
datapath = os.path.join("/tmp", "database")
path = os.path.join(datapath, "Raw_DataBase")
TrSet, TeSet = LoadData('Face', path, decorrelate=False, resize=(65, 65))
to_display = TrSet[0][0, 0:10, :, :, :]
print('Size=', TrSet[0].shape)
DisplayDico(to_display)

Training on a face database

In [ ]:
# MP Parameters
nb_dico = 20
width = 9
dico_size = (width, width)
l0 = 20
seed = 42
# Learning Parameters
eta = .05
nb_epoch = 500

TrSet, TeSet = LoadData('Face', path, decorrelate=False, resize=(65, 65))
N_TrSet, _, _, _ = LocalContrastNormalization(TrSet)
Filtered_L_TrSet = FilterInputData(
    N_TrSet, sigma=0.25, style='Custom', start_R=15)
to_display = Filtered_L_TrSet[0][0, 0:10, :, :, :]
DisplayDico(to_display)

mask = GenerateMask(full_size=(nb_dico, 1, width, width), sigma=0.8, style='Gaussian')
DisplayDico(mask)

Training the ConvMP Layer without homeostasis

In [ ]:
from CHAMP.CHAMP_Layer import CHAMP_Layer

from CHAMP.DataTools import SaveNetwork, LoadNetwork
fname = 'cache_dir_CNN/CHAMP_low_None.pkl'
try:
    L1_mask = LoadNetwork(loading_path=fname)
except:
    L1_mask = CHAMP_Layer(l0_sparseness=l0, nb_dico=nb_dico,
                      dico_size=dico_size, mask=mask, verbose=2)
    dico_mask = L1_mask.TrainLayer(
        Filtered_L_TrSet, eta=eta, nb_epoch=nb_epoch, seed=seed)
    SaveNetwork(Network=L1_mask, saving_path=fname)

DisplayDico(L1_mask.dictionary)
DisplayConvergenceCHAMP(L1_mask, to_display=['error', 'histo'])
DisplayWhere(L1_mask.where)

Training the ConvMP Layer with homeostasis

In [ ]:
fname = 'cache_dir_CNN/CHAMP_low_HAP.pkl'
try:
    L1_mask = LoadNetwork(loading_path=fname)
except:

    # Learning Parameters
    eta_homeo = 0.0025
    L1_mask = CHAMP_Layer(l0_sparseness=l0, nb_dico=nb_dico,
                          dico_size=dico_size, mask=mask, verbose=1)
    dico_mask = L1_mask.TrainLayer(
        Filtered_L_TrSet, eta=eta, eta_homeo=eta_homeo, nb_epoch=nb_epoch, seed=seed)
    SaveNetwork(Network=L1_mask, saving_path=fname)

DisplayDico(L1_mask.dictionary)
DisplayConvergenceCHAMP(L1_mask, to_display=['error'])
DisplayConvergenceCHAMP(L1_mask, to_display=['histo'])
DisplayWhere(L1_mask.where)
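The homeostasis used above (HAP) adapts a gain per atom so that all atoms end up selected about equally often. A toy illustration of such a gain rule, with made-up scores and parameters (not the CHAMP implementation):

```python
import numpy as np

def simulate(eta_homeo, nb_dico=16, n_steps=5000, seed=3):
    """Count how often each atom wins a gain-modulated competition."""
    rng = np.random.default_rng(seed)
    gain = np.ones(nb_dico)
    counts = np.zeros(nb_dico)
    p_target = 1.0 / nb_dico                 # aim for uniform atom usage
    for t in range(1, n_steps + 1):
        # toy selection: later atoms are intrinsically preferred
        scores = rng.normal(size=nb_dico) + np.linspace(0, 1, nb_dico)
        i = np.argmax(gain * scores)         # gain modulates the competition
        counts[i] += 1
        # boost under-used atoms, attenuate over-used ones
        gain *= np.exp(eta_homeo * (p_target - counts / t))
    return counts

plain = simulate(eta_homeo=0.0)
homeo = simulate(eta_homeo=0.05)
assert homeo.std() < plain.std()             # usage is more uniform with homeostasis
```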

Reconstructing the input image

In [ ]:
from CHAMP.DataTools import Rebuilt
import torch
rebuilt_image = Rebuilt(torch.FloatTensor(L1_mask.code), L1_mask.dictionary)
DisplayDico(rebuilt_image[0:10, :, :, :]);
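In the non-convolutional case, the analogous reconstruction is simply the sparse code multiplied by the dictionary. A minimal sketch with made-up sizes (not the CHAMP Rebuilt function):

```python
import numpy as np

rng = np.random.default_rng(4)
n_dictionary, n_pixels = 16, 64
D = rng.normal(size=(n_dictionary, n_pixels))
D /= np.linalg.norm(D, axis=1, keepdims=True)

# a sparse code with 5 active coefficients
code = np.zeros(n_dictionary)
active = rng.choice(n_dictionary, size=5, replace=False)
code[active] = rng.normal(size=5)

x_hat = code @ D                             # linear generative model: weighted sum of atoms
assert x_hat.shape == (n_pixels,)
```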

Training the ConvMP Layer with higher-level filters

We train higher-level feature vectors by forcing the network to:

  • learn bigger filters,
  • represent the information using a bigger dictionary (higher overcompleteness),
  • represent the information with fewer features (higher sparseness).
In [ ]:
fname = 'cache_dir_CNN/CHAMP_high_None.pkl'
try:
    L1_mask = LoadNetwork(loading_path=fname)
except:

    nb_dico = 60
    width = 19
    dico_size = (width, width)
    l0 = 5
    mask = GenerateMask(full_size=(nb_dico, 1, width, width), sigma=0.8, style='Gaussian')
    # Learning Parameters
    eta_homeo = 0.0
    eta = .05
    nb_epoch = 500
    # learn
    L1_mask = CHAMP_Layer(l0_sparseness=l0, nb_dico=nb_dico,
                          dico_size=dico_size, mask=mask, verbose=0)
    dico_mask = L1_mask.TrainLayer(
        Filtered_L_TrSet, eta=eta, eta_homeo=eta_homeo, nb_epoch=nb_epoch, seed=seed)
    SaveNetwork(Network=L1_mask, saving_path=fname)


DisplayDico(L1_mask.dictionary)
DisplayConvergenceCHAMP(L1_mask, to_display=['error'])
DisplayConvergenceCHAMP(L1_mask, to_display=['histo'])
DisplayWhere(L1_mask.where);
In [ ]:
fname = 'cache_dir_CNN/CHAMP_high_HAP.pkl'
try:
    L1_mask = LoadNetwork(loading_path=fname)
except:

    nb_dico = 60
    width = 19
    dico_size = (width, width)
    l0 = 5
    mask = GenerateMask(full_size=(nb_dico, 1, width, width), sigma=0.8, style='Gaussian')
    # Learning Parameters
    eta_homeo = 0.0025
    eta = .05
    nb_epoch = 500
    # learn
    L1_mask = CHAMP_Layer(l0_sparseness=l0, nb_dico=nb_dico,
                          dico_size=dico_size, mask=mask, verbose=0)
    dico_mask = L1_mask.TrainLayer(
        Filtered_L_TrSet, eta=eta, eta_homeo=eta_homeo, nb_epoch=nb_epoch, seed=seed)
    SaveNetwork(Network=L1_mask, saving_path=fname)

DisplayDico(L1_mask.dictionary)
DisplayConvergenceCHAMP(L1_mask, to_display=['error'])
DisplayConvergenceCHAMP(L1_mask, to_display=['histo'])
DisplayWhere(L1_mask.where);

Training on MNIST database

In [ ]:
fname = 'cache_dir_CNN/CHAMP_MNIST_HAP.pkl'
try:
    L1_mask = LoadNetwork(loading_path=fname)
except:
    path = os.path.join(datapath, "MNISTtorch")
    TrSet, TeSet = LoadData('MNIST', data_path=path)
    N_TrSet, _, _, _ = LocalContrastNormalization(TrSet)
    Filtered_L_TrSet = FilterInputData(
        N_TrSet, sigma=0.25, style='Custom', start_R=15)
    nb_dico = 60
    width = 7
    dico_size = (width, width)
    l0 = 15
    mask = GenerateMask(full_size=(nb_dico, 1, width, width), sigma=0.8, style='Gaussian')
    # Learning Parameters
    eta_homeo = 0.0025
    eta = .05
    nb_epoch = 500
    # learn
    L1_mask = CHAMP_Layer(l0_sparseness=l0, nb_dico=nb_dico,
                          dico_size=dico_size, mask=mask, verbose=2)
    dico_mask = L1_mask.TrainLayer(
        Filtered_L_TrSet, eta=eta, eta_homeo=eta_homeo, nb_epoch=nb_epoch, seed=seed)
    SaveNetwork(Network=L1_mask, saving_path=fname)

DisplayDico(L1_mask.dictionary)
DisplayConvergenceCHAMP(L1_mask, to_display=['error'])
DisplayConvergenceCHAMP(L1_mask, to_display=['histo'])
DisplayWhere(L1_mask.where);

Computational details

caching simulation data

In [ ]:
!ls -l {shl.cache_dir}/{tag}*
#!rm {shl.cache_dir}/{tag}*lock*
#!rm {shl.cache_dir}/{tag}*
#!ls -l {shl.cache_dir}/{tag}*
In [ ]:
%run model.py {tag} 0
%run model.py 35

Version used

In [40]:
%load_ext watermark
%watermark -i -h -m -v -p numpy,matplotlib,shl_scripts
2019-06-25T16:41:17+02:00

CPython 3.7.3
IPython 7.5.0

numpy 1.16.4
matplotlib 3.1.0
shl_scripts 20171221

compiler   : Clang 10.0.1 (clang-1001.0.46.3)
system     : Darwin
release    : 18.6.0
machine    : x86_64
processor  : i386
CPU cores  : 36
interpreter: 64bit
host name  : fortytwo

exporting the notebook

In [47]:
!jupyter nbconvert --to html_embed Annex.ipynb --output=index.html
[NbConvertApp] Converting notebook Annex.ipynb to html_embed
/usr/local/lib/python3.7/site-packages/nbconvert/filters/datatypefilter.py:41: UserWarning: Your element with mimetype(s) dict_keys(['image/pdf']) is not able to be represented.
  mimetypes=output.keys())
[NbConvertApp] Writing 5721758 bytes to index.html
In [42]:
#!jupyter-nbconvert --template report --to pdf Annex.ipynb
In [43]:
#!pandoc Annex.html -o Annex.pdf
In [44]:
#!/Applications/Chromium.app/Contents/MacOS/Chromium --headless --disable-gpu --print-to-pdf=Annex.pdf file:///tmp/Annex.html
In [45]:
#!zip Annex.zip Annex.html

version control

In [46]:
!git status
On branch master
Your branch is ahead of 'origin/master' by 1 commit.
  (use "git push" to publish your local commits)

Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)

	modified:   Annex.ipynb
	modified:   figure_HAP.pdf
	modified:   figure_HEH.pdf
	modified:   figure_map.pdf
	modified:   hulk.tex
	modified:   model.py

Untracked files:
  (use "git add <file>..." to include in what will be committed)

	index.html

no changes added to commit (use "git add" and/or "git commit -a")
In [ ]:
!git pull
In [ ]:
!git commit -am' {tag} : re-running notebooks' 
In [ ]:
!git push

Done. Thanks for your attention!